
    Efficient fetal-maternal ECG signal separation from two channel maternal abdominal ECG via diffusion-based channel selection

    There is a need for affordable, widely deployable maternal-fetal ECG monitors to improve maternal and fetal health during pregnancy and delivery. Here we present the mathematical formalism and clinical validation of an algorithm, based on diffusion-based channel selection, that accurately separates maternal and fetal ECG from a two-channel signal acquired over the maternal abdomen.
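
The abstract does not spell out the algorithm, but the flavor of diffusion-based channel selection can be conveyed with a generic diffusion-maps embedding of short signal windows. Everything below (function names, the bandwidth `eps`, the choice of two coordinates) is illustrative, not the authors' implementation:

```python
import numpy as np

def diffusion_embedding(windows, eps, n_coords=2):
    """Embed signal windows with a basic diffusion-maps construction (sketch)."""
    # pairwise squared distances between windows
    d2 = ((windows[:, None, :] - windows[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / eps)                    # Gaussian affinity
    A = K / K.sum(axis=1, keepdims=True)     # row-normalize -> Markov kernel
    vals, vecs = np.linalg.eig(A)
    order = np.argsort(-vals.real)
    # drop the trivial constant eigenvector; scale coords by eigenvalues
    return vecs.real[:, order[1:n_coords + 1]] * vals.real[order[1:n_coords + 1]]
```

In a two-channel setting, one could compute such an embedding per channel and prefer the channel whose embedding best separates maternal and fetal beat clusters; the paper's actual selection criterion may differ.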

    Corporate Governance and Organizational Integrity


    Hessian-Free High-Resolution Nesterov Acceleration for Sampling

    We propose an accelerated-gradient-based MCMC method. It relies on a modification of Nesterov's accelerated gradient method for strongly convex functions (NAG-SC): we first reformulate NAG-SC as a Hessian-free high-resolution ODE, then release the high-resolution coefficient as a free hyperparameter, and finally inject appropriate noise and discretize the diffusion process. The accelerated sampling enabled by this new hyperparameter is not only demonstrated experimentally on several learning tasks, but also quantified theoretically, both at the continuous level and after discretization. For (not necessarily strongly) convex and L-smooth potentials, exponential convergence in χ² divergence is proved, with a rate analogous to state-of-the-art results for underdamped Langevin dynamics, plus an additional acceleration. The method also works for nonconvex potentials, for which we establish exponential convergence as long as the potential satisfies a Poincaré inequality.
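
A minimal Euler-Maruyama step of an HFHR-style diffusion might look as follows. The exact dynamics and noise scalings should be taken from the paper; treat the extra `alpha` terms below as an illustrative sketch of "underdamped Langevin plus an additional gradient and noise term in the position update":

```python
import numpy as np

def hfhr_step(x, v, grad_f, h, alpha, gamma, rng):
    """One Euler-Maruyama step of an HFHR-style diffusion (sketch).

    alpha plays the role of the released high-resolution coefficient;
    alpha = 0 recovers plain underdamped Langevin dynamics.
    """
    g = grad_f(x)
    x_new = x + h * (v - alpha * g) + np.sqrt(2 * alpha * h) * rng.standard_normal(x.shape)
    v_new = v - h * (gamma * v + g) + np.sqrt(2 * gamma * h) * rng.standard_normal(v.shape)
    return x_new, v_new
```

For a standard Gaussian target (potential f(x) = x²/2, so grad_f is the identity), iterating this step produces samples whose mean and variance approach 0 and 1, up to discretization bias.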

    Item-Graph2vec: An Efficient and Effective Approach Using Item Co-occurrence Graph Embedding for Collaborative Filtering

    Current item-item collaborative filtering algorithms based on artificial neural networks, such as Item2vec, have become ubiquitous and are widely applied in modern recommender systems. However, these approaches do not scale to large item-based recommendation systems because of their extremely long training times. To overcome the high training cost and poor stability of current algorithms on large-scale data sets, we describe the item graph embedding algorithm Item-Graph2vec. This algorithm transforms users' shopping lists into an item co-occurrence graph, obtains item sequences through random walks on this graph, and finally trains item vectors on those sequences. We posit that, because the number of items is stable, the size and density of the item co-occurrence graph change only slightly as the training corpus grows. Item-Graph2vec therefore has a stable runtime on large-scale data sets, and its performance advantage becomes more pronounced as the training corpus grows. Extensive experiments on real-world data sets demonstrate that Item-Graph2vec is three times more efficient than Item2vec on the Douban data set, while the error introduced by random walk sampling remains small.
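
The graph-construction and walk-sampling stages described above can be sketched as follows; the resulting walks would then be fed to any skip-gram trainer (e.g. a word2vec implementation) to produce item vectors. Function names and the weighted-walk policy are illustrative assumptions, not the paper's exact code:

```python
import random
from collections import defaultdict
from itertools import combinations

def build_cooccurrence_graph(baskets):
    """Edge weight = number of baskets in which two items co-occur."""
    graph = defaultdict(lambda: defaultdict(int))
    for basket in baskets:
        for a, b in combinations(set(basket), 2):
            graph[a][b] += 1
            graph[b][a] += 1
    return graph

def random_walks(graph, walk_len, walks_per_node, rng):
    """Sample weighted random walks; each walk is one training 'sentence'."""
    walks = []
    for start in graph:
        for _ in range(walks_per_node):
            walk, node = [start], start
            for _ in range(walk_len - 1):
                nbrs = list(graph[node])
                weights = [graph[node][n] for n in nbrs]
                node = rng.choices(nbrs, weights=weights)[0]
                walk.append(node)
            walks.append(walk)
    return walks
```

Because the graph has one node per item, its size is governed by the catalog rather than by the number of transactions, which is the stability property the abstract appeals to.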

    On Scalable and Fast Langevin-Dynamics-Based Sampling Algorithms

    Langevin-dynamics-based sampling algorithms are arguably among the most widely used Markov chain Monte Carlo (MCMC) methods. Two main directions in the modern study of MCMC methods are (i) how to scale MCMC methods to big-data applications, and (ii) tight, non-asymptotic convergence analysis of MCMC algorithms with explicit dependence on the characteristics of the target distribution. This thesis continues previous efforts along these two lines and consists of three parts. In the first part, we study stochastic gradient MCMC methods for large-scale applications. We propose a non-uniform gradient subsampling scheme that approximately matches the transition kernel of a base MCMC method with full gradients, aiming for better sample quality. The demonstration is based on underdamped Langevin dynamics. In the second part, we consider an analog, for sampling, of Nesterov's accelerated algorithm in optimization. We derive a dynamics termed Hessian-Free High-Resolution (HFHR) dynamics from a high-resolution ordinary differential equation description of Nesterov's accelerated algorithm. We then quantify the acceleration of HFHR over underdamped Langevin dynamics at both the continuous-dynamics level and the discrete-algorithm level. In the third part, we study a broad family of bounded, contractive-SDE-based sampling algorithms via mean-square analysis. We show how to extend the applicability of classical mean-square analysis from finite time to infinite time. Iteration complexity in 2-Wasserstein distance is also characterized; when applied to the Langevin Monte Carlo algorithm, this yields an improved iteration complexity bound.
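
The non-uniform subsampling idea from the first part can be illustrated with a stochastic gradient Langevin step whose minibatch gradient is importance-weighted so that it stays unbiased for the full-data gradient. This sketch uses overdamped Langevin for brevity (the thesis demonstrates the scheme on underdamped dynamics), and all names below are illustrative:

```python
import numpy as np

def sgld_step(theta, data, grad_i, h, probs, batch, rng):
    """One SGLD step with non-uniform gradient subsampling (sketch).

    Indices are drawn with probabilities probs; each sampled per-datum
    gradient is reweighted by 1/probs[i], so the minibatch average is an
    unbiased estimate of the full-data gradient sum.
    """
    idx = rng.choice(len(data), size=batch, p=probs)
    g = np.mean([grad_i(theta, data[i]) / probs[i] for i in idx], axis=0)
    return theta - h * g + np.sqrt(2 * h) * rng.standard_normal(np.shape(theta))
```

With uniform probs this reduces to standard SGLD; choosing probs proportional to per-datum gradient magnitudes is one common way to reduce the subsampling variance.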

    Concentration of Data Encoding in Parameterized Quantum Circuits

    Variational quantum algorithms have been acknowledged as a leading strategy to realize near-term quantum advantages in meaningful tasks, including machine learning and combinatorial optimization. When applied to tasks involving classical data, such algorithms generally begin with quantum circuits for data encoding and then train quantum neural networks (QNNs) to minimize target functions. Although QNNs have been widely studied to improve these algorithms' performance on practical tasks, there is a gap in systematically understanding the influence of data encoding on the eventual performance. In this paper, we make progress in filling this gap by considering the common data encoding strategies based on parameterized quantum circuits. We prove that, under reasonable assumptions, the distance between the average encoded state and the maximally mixed state can be explicitly upper-bounded in terms of the width and depth of the encoding circuit. This result in particular implies that the average encoded state concentrates on the maximally mixed state at a speed exponential in the depth. Such concentration seriously limits the capabilities of quantum classifiers, and strictly restricts the distinguishability of encoded states from a quantum information perspective. We further support our findings by numerically verifying these results on both synthetic and public data sets. Our results highlight the significance of quantum data encoding in machine learning tasks and may shed light on future encoding strategies. Comment: 26 pages including appendix.
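
The concentration effect can be observed even in a toy single-qubit example: encode each datum x by applying depth-many RY(x) rotations to |0⟩, average the resulting density matrices over the data, and measure the trace distance to the maximally mixed state I/2. This is a deliberately simplified stand-in for the paper's general circuit families:

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def avg_encoded_state(xs, depth):
    """Average density matrix of a depth-layer RY data encoding (toy model)."""
    rho = np.zeros((2, 2), dtype=complex)
    for x in xs:
        psi = np.array([1.0, 0.0], dtype=complex)
        for _ in range(depth):
            psi = ry(x) @ psi
        rho += np.outer(psi, psi.conj())
    return rho / len(xs)

def dist_to_maximally_mixed(rho):
    """Trace distance to I/2: half the sum of |eigenvalues| of rho - I/2."""
    diff = rho - np.eye(2) / 2
    return 0.5 * np.abs(np.linalg.eigvalsh(diff)).sum()
```

For data spread over [0, 1], deepening the encoding spreads the angles over the Bloch sphere, and the average state visibly drifts toward I/2, mirroring the depth dependence proved in the paper.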

    Impossible Differential Cryptanalysis of SPN Ciphers

    Impossible differential cryptanalysis is a very popular tool for analyzing the security of modern block ciphers, and the core of such attacks is the existence of impossible differentials. Currently, most methods for finding impossible differentials are based on the miss-in-the-middle technique and are quite ad hoc. In this paper, we concentrate on SPN ciphers whose diffusion layer is defined by a linear transformation P. Based on the theory of linear algebra, we propose several criteria on P and its inverse P^{-1} that characterize the existence of 3/4-round impossible differentials. We further discuss the possibility of extending these methods to analyze 5/6-round impossible differentials. Using these criteria, impossible differentials for reduced-round Rijndael are found that are consistent with the ones found before. New 4-round impossible differentials are discovered for the block cipher ARIA. Many 4-round impossible differentials are also detected for the first time for a kind of SPN cipher that employs a 32×32 binary matrix, proposed at ICISC 2006, as its diffusion layer. We conclude that the linear transformation should be carefully designed in order to protect the cipher against impossible differential cryptanalysis.
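
One building block behind such criteria can be sketched concretely: over GF(2), each output of a binary diffusion matrix P is the XOR of the inputs selected by one row, so an output fed by exactly one active input cannot cancel and is forced active. Comparing forced-active positions propagated forward with forced-inactive positions propagated backward (miss-in-the-middle) exposes contradictions. This is only the single-active-input observation, not the paper's full set of criteria:

```python
def forced_active_outputs(P, active_inputs):
    """Output positions forced active after a binary linear layer (sketch).

    Over GF(2), output j is the XOR of inputs i with P[j][i] = 1. If exactly
    one active input feeds output j, its nonzero difference cannot cancel,
    so output j is forced active; with >= 2 active inputs, cancellation is
    possible and the output's activity is undetermined.
    """
    forced = set()
    for j, row in enumerate(P):
        feeds = [i for i in active_inputs if row[i]]
        if len(feeds) == 1:
            forced.add(j)
    return forced
```

If a forward pass forces position j active while the backward pass (using P^{-1}) forces it inactive, the truncated differential is impossible.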